Results by access type:
  Full text (paid): 14,178 articles
  Free: 2,120 articles
  Free (domestic): 1,735 articles
Results by subject area:
  Electrical engineering: 590 articles
  Technical theory: 4 articles
  General: 906 articles
  Chemical industry: 420 articles
  Metalworking: 95 articles
  Machinery and instruments: 333 articles
  Building science: 270 articles
  Mining engineering: 100 articles
  Energy and power: 209 articles
  Light industry: 359 articles
  Water conservancy engineering: 147 articles
  Petroleum and natural gas: 105 articles
  Weapons industry: 91 articles
  Radio and electronics: 2,522 articles
  General industrial technology: 657 articles
  Metallurgical industry: 190 articles
  Nuclear technology: 134 articles
  Automation technology: 10,901 articles
Results by publication year:
  2024: 45 articles
  2023: 350 articles
  2022: 487 articles
  2021: 598 articles
  2020: 681 articles
  2019: 538 articles
  2018: 530 articles
  2017: 631 articles
  2016: 765 articles
  2015: 894 articles
  2014: 1,413 articles
  2013: 1,302 articles
  2012: 1,318 articles
  2011: 1,246 articles
  2010: 788 articles
  2009: 768 articles
  2008: 913 articles
  2007: 928 articles
  2006: 682 articles
  2005: 656 articles
  2004: 493 articles
  2003: 429 articles
  2002: 335 articles
  2001: 280 articles
  2000: 182 articles
  1999: 156 articles
  1998: 106 articles
  1997: 80 articles
  1996: 80 articles
  1995: 55 articles
  1994: 59 articles
  1993: 32 articles
  1992: 37 articles
  1991: 22 articles
  1990: 19 articles
  1989: 15 articles
  1988: 13 articles
  1987: 9 articles
  1986: 11 articles
  1985: 9 articles
  1984: 5 articles
  1983: 5 articles
  1981: 5 articles
  1980: 5 articles
  1979: 6 articles
  1977: 5 articles
  1960: 3 articles
  1959: 7 articles
  1956: 3 articles
  1951: 3 articles
A total of 10,000 query results were returned (search time: 16 ms).
21.
The increasing demand for low power consumption and high computational performance is outpacing the available technological improvements in embedded systems. Approximate computing is a design paradigm that tries to bridge this gap by leveraging the inherent error resilience of certain applications and trading quality for reductions in resource usage. Numerous approximation methods have emerged in this research field. While these methods are commonly demonstrated in isolation, combining them can increase the benefits achieved in complex systems. However, the propagation of errors throughout the system necessitates a global optimization of parameters, leading to an exponentially growing design space. Additionally, the parameterization of approximated components must consider potential cross-dependencies between them. This work proposes a systematic approach to integrate and optimally configure parameterizable approximate components in FPGA-based applications, focusing on low-level but high-bandwidth image processing pipelines. The design space is explored by a multi-objective genetic algorithm that takes parameter dependencies between different components into account. During the exploration, appropriate models estimate the quality-resource trade-off of probed solutions without the need for time-consuming synthesis. We demonstrate and evaluate the effectiveness of our approach on two image processing applications that employ multiple approximations. The experimental results show that the proposed methods produce a wide range of Pareto-optimal solutions, offering various choices regarding the desired quality-resource trade-off.
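For illustration, below is a minimal, hypothetical sketch of the kind of design-space exploration the abstract describes: per-component approximation levels are sampled, placeholder quality and resource models score each configuration (including a made-up cross-dependency term), and only the non-dominated (Pareto) configurations are kept. It uses random sampling with a Pareto filter rather than the paper's genetic algorithm; all component names, levels, and models are assumptions.

```python
# Hypothetical sketch: explore approximation parameters for two pipeline
# stages and keep the non-dominated (Pareto-optimal) configurations.
import random

STAGES = ["gauss_blur", "sobel"]          # hypothetical pipeline components
LEVELS = range(0, 8)                      # 0 = exact, 7 = most aggressive

def quality_model(cfg):
    # Placeholder quality model: error grows with approximation level and
    # with a cross-dependency term between the two stages.
    base = sum(cfg.values())
    coupling = 0.3 * cfg["gauss_blur"] * cfg["sobel"]
    return 1.0 / (1.0 + base + coupling)   # higher is better

def resource_model(cfg):
    # Placeholder LUT-usage model: resources shrink as levels rise.
    return 1000 - 60 * sum(cfg.values())   # lower is better

def dominates(a, b):
    # a dominates b if it is no worse in both objectives and better in one.
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(population):
    scored = [((quality_model(c), resource_model(c)), c) for c in population]
    return [c for s, c in scored
            if not any(dominates(t, s) for t, _ in scored if t != s)]

population = [{s: random.choice(LEVELS) for s in STAGES} for _ in range(200)]
for cfg in pareto_front(population):
    print(cfg, round(quality_model(cfg), 3), resource_model(cfg))
```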
22.
Nowadays deep learning is applied in almost every research field and helps achieve impressive results in a great number of challenging tasks. The main problem is that this kind of learning, and consequently the neural networks that can be called deep, is resource intensive: specialized hardware is needed to perform the computation in a reasonable time, while many tasks must run as close to real time as possible. Many components, such as code, algorithms, numeric precision and hardware, need to be optimized to make these models efficient and usable. Together, these optimizations can produce highly accurate and fast learning models. The paper reports a study in this direction for the challenging tasks of face detection and emotion recognition.
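As one concrete example of the numeric-precision lever mentioned above, the hedged sketch below applies post-training dynamic quantization in PyTorch to a stand-in classifier head; the architecture and the seven-class output are placeholders, not the paper's model.

```python
# Hypothetical sketch: shrink and speed up a classifier head with
# post-training dynamic quantization (one numeric-precision trade-off).
import torch
import torch.nn as nn

model = nn.Sequential(            # stand-in for a trained backbone's head
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 7),            # e.g. 7 basic emotion classes (assumed)
)

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x).shape)     # same interface, int8 weights at inference
```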
23.
Accurate prognosis of the limited durability of the proton exchange membrane fuel cell (PEMFC) is one of the key factors for its commercialization on a large scale. Because they ignore the internal structure of the PEMFC and simplify the prognostic process, data-driven approaches are currently the most common way to predict remaining useful life (RUL). In this paper, the proposed cycle reservoir with jumps (CRJ) model improves on the echo state network (ESN) by changing how the neurons in the reservoir are connected and by speeding up the linear fitting process. Experiments verify the performance of the CRJ model in predicting stack voltage under static-current and quasi-dynamic-current conditions. In addition, the reliability of the CRJ model is verified with different amounts of data used as training and test sets. The experimental results demonstrate that the CRJ model achieves better remaining-useful-life prognosis for fuel cells.
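To make the reservoir idea concrete, here is a small, hypothetical CRJ-style sketch: a reservoir whose neurons are connected in a unidirectional cycle with regularly spaced jump connections, driven by a toy voltage-like signal, with a ridge-regression readout. Reservoir size, weights, jump length and the synthetic data are all assumptions, not values from the paper.

```python
# Hypothetical sketch of a cycle-reservoir-with-jumps state update and a
# ridge-regression readout on a synthetic, slowly decaying voltage signal.
import numpy as np

N, JUMP, rc, rj = 100, 7, 0.7, 0.3         # size, jump length, cycle/jump weights

W = np.zeros((N, N))
for i in range(N):
    W[(i + 1) % N, i] = rc                 # unidirectional cycle
for i in range(0, N, JUMP):
    W[(i + JUMP) % N, i] = rj              # bidirectional jump connections
    W[i, (i + JUMP) % N] = rj

W_in = 0.1 * np.sign(np.random.randn(N, 1))   # fixed random-sign input weights

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + (W_in * u_t).ravel())
        states.append(x.copy())
    return np.array(states)

# Toy RUL-style regression: predict the next value of a decaying voltage.
t = np.linspace(0, 10, 500)
voltage = 0.68 - 0.01 * t + 0.002 * np.random.randn(t.size)
X = run_reservoir(voltage[:-1])
y = voltage[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)   # linear readout
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```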
24.
Our primary research hypothesis rests on a simple idea: the evolution of top-rated publications on a particular theme depends heavily on the progress and maturity of related topics, even when there are no clear relations or when some concepts appear to fade away and give way to newer ones. We implemented our model based on the Computer Science Ontology (CSO) and analyzed 44 years of publications. We then derived the most important concepts related to Cloud Computing (CC) from the scientific collection offered by Clarivate Analytics. Our methodology includes data extraction using advanced web crawling techniques, data preparation, statistical data analysis, and graphical representations. Related concepts were obtained by aggregating the scores using the Jaccard coefficient and the CSO ontology. Our article reveals the contribution of Cloud Computing topics to research papers in leading scientific journals and the relationships between the field of Cloud Computing and the interdependent subdivisions identified in the broader framework of Computer Science.
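The Jaccard-based aggregation step can be illustrated with a tiny, made-up corpus: each topic is represented by the set of papers it is attached to, and its relatedness to "cloud computing" is the Jaccard coefficient of the two paper sets. The data below is purely illustrative.

```python
# Hypothetical sketch: score how strongly a CSO topic co-occurs with
# "cloud computing" across papers using the Jaccard coefficient.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy corpus: paper id -> CSO topics attached to it (made-up data).
papers = {
    "p1": {"cloud computing", "virtualization", "scheduling"},
    "p2": {"cloud computing", "edge computing"},
    "p3": {"machine learning", "scheduling"},
}

def papers_with(topic):
    return {pid for pid, topics in papers.items() if topic in topics}

target = papers_with("cloud computing")
for topic in ["virtualization", "edge computing", "machine learning", "scheduling"]:
    print(topic, round(jaccard(target, papers_with(topic)), 2))
```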
25.
Event sequences and time series are widely recorded in many application domains; examples are stock market prices, electronic health records, and server operation and performance logs. Common goals for recording them are monitoring, root cause analysis and predictive analytics. Current analysis methods generally focus on exploring either event sequences or time series, yet deeper insights are gained by combining both. We present a visual analytics approach where users can explore time series and event data simultaneously, combining visualization, automated methods and human interaction, and can iteratively refine the visualization. Correlations between event sequences and time series are found by means of an interactive algorithm, which also computes the presence of monotonic effects. We illustrate the effectiveness of our method by applying it to real-world and synthetic data sets.
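A hedged sketch of the kind of event/time-series coupling such an algorithm might quantify is shown below: for each event, the shift of the series mean across a window and a Spearman rank correlation as a simple monotonic-effect test. The window size, data and statistics are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch: quantify how a time series reacts to events by
# comparing a window before/after each event and checking for a
# monotonic (Spearman) trend after the event.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0, 0.1, 1000))     # e.g. a server metric
events = [200, 450, 700]                         # event indices in the series
W = 50                                           # samples before/after

for e in events:
    before, after = series[e - W:e], series[e:e + W]
    shift = after.mean() - before.mean()
    rho, p = spearmanr(np.arange(W), after)      # monotonic effect after event
    print(f"event@{e}: mean shift={shift:+.3f}, spearman rho={rho:+.2f} (p={p:.3f})")
```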
26.
Visual data analysis can be envisioned as a collaboration between the user and the computational system with the aim of completing a given task. Pursuing an effective system-user integration, in which the system actively helps the user reach his or her analysis goal, has been a focus of visualization research for quite some time. However, this problem is still largely unsolved. As a result, users may be overwhelmed by powerful but complex visual analysis systems, which also limits their ability to produce insightful results. In this context, guidance is a promising step towards enabling an effective mixed-initiative collaboration that promotes the visual analysis. However, how guidance should be put into practice is still to be unravelled. We therefore conducted a comprehensive literature review and provide an overview of how guidance is tackled by different approaches in visual analysis systems. We distinguish between guidance that the system provides to support the user and guidance that the user provides to support the system. By identifying open problems, we highlight promising research directions and point to missing factors that are needed to enable the envisioned human-computer collaboration and thus promote a more effective visual data analysis.
27.
In software-defined networks, the controller cannot guarantee that the network policies it issues are executed correctly on the forwarding devices. To address this security problem, a new forwarding-path monitoring scheme is proposed. First, building on the controller's global network view, a path-credential exchange and processing mechanism based on the OpenFlow protocol is designed. Hash chains and message authentication codes are then adopted as the key techniques for generating and processing forwarding-path credential information. On this basis, the Ryu controller and the open-source switch Open vSwitch are extended with the corresponding processing logic to build a lightweight path-security mechanism. Test results show that the mechanism effectively secures the data forwarding path and reduces throughput overhead by more than 20% compared with the trusted forwarding scheme for the SDN data plane (SDNsec), making it better suited to networks with complex paths; however, delay and CPU usage fluctuate by more than 15%, which requires further optimization.
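As a hedged illustration of the hash-chain-plus-MAC idea (not the paper's exact protocol), the sketch below lets each switch on a path extend a credential that binds its identity to the packet, so the controller, which shares the per-switch keys, can recompute and verify the path. Switch IDs, keys and packet fields are invented.

```python
# Hypothetical sketch of a per-hop forwarding-path credential: each switch
# extends a hash chain and authenticates the link with an HMAC so the
# controller can verify that traffic followed the intended path.
import hashlib, hmac

SWITCH_KEYS = {"s1": b"key-s1", "s2": b"key-s2", "s3": b"key-s3"}  # shared with controller

def extend_credential(prev_digest: bytes, switch_id: str, pkt_id: bytes) -> bytes:
    """Hash-chain step: bind this hop to the credential carried so far."""
    link = hashlib.sha256(prev_digest + switch_id.encode() + pkt_id).digest()
    return hmac.new(SWITCH_KEYS[switch_id], link, hashlib.sha256).digest()

def build_path_credential(path, pkt_id=b"flow-42"):
    digest = b"\x00" * 32
    for sw in path:
        digest = extend_credential(digest, sw, pkt_id)
    return digest

# Controller side: recompute over the policy path and compare.
reported = build_path_credential(["s1", "s2", "s3"])
expected = build_path_credential(["s1", "s2", "s3"])
print("path verified:", hmac.compare_digest(reported, expected))
```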
28.
王伦耀  夏银水  储著飞 《电子学报》2019,47(9):1868-1874
Approximate computing trades circuit output accuracy for improvements in power, area and speed. Exploiting the XOR-dominated structure of Reed-Muller (RM) logic, this paper proposes an area-oriented optimization algorithm for fixed-polarity RM (FPRM) circuits based on approximate computing. It includes an error-rate computation method for RM logic based on disjoint operations and, under an error-rate constraint, a search method for approximate FPRM functions that favours area reduction. The algorithm was tested on MCNC (Microelectronics Center of North Carolina) benchmark circuits. Experimental results show that it can handle large circuits with up to 199 input variables; at an average error rate of 5.7%, the average circuit area is reduced by 62.0%. The area optimization also helps reduce dynamic power consumption while having little impact on circuit delay.
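To make the error-rate notion concrete, here is a small, hypothetical example: an FPRM function written as an XOR of product terms, an approximation that drops one term to save area, and the error rate obtained by exhaustive enumeration over the inputs (the paper instead uses a disjoint-operation method that scales to large circuits). The terms are made up.

```python
# Hypothetical sketch: an FPRM function as an XOR of product terms, an
# approximation that drops one term, and its error rate by enumeration.
from itertools import product

N = 4
exact_terms = [(0,), (1, 2), (0, 2, 3), (1, 3)]   # made-up product terms (variable indices)
approx_terms = exact_terms[:-1]                    # drop one term to save area

def fprm_eval(terms, assignment):
    value = 0
    for term in terms:
        value ^= all(assignment[i] for i in term)  # AND of literals, XORed in
    return value

errors = sum(
    fprm_eval(exact_terms, a) != fprm_eval(approx_terms, a)
    for a in product([0, 1], repeat=N)
)
print(f"error rate: {errors / 2**N:.2%}")
```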
29.
To address the latency and terminal energy-consumption problems caused by high-speed data transmission and computation, a transmission scheme using equal power allocation on the uplink is proposed. First, a system model tailored to augmented reality (AR) is built based on the collaborative nature of AR services. Second, the system frame structure is analysed in detail, and constraints are established with minimization of the total system energy consumption as the optimization objective. Finally, under delay and power constraints, a convex-optimization-based mathematical model for mobile edge computing (MEC) resource allocation is formulated to obtain the optimal communication and computation resource allocation. Compared with independent transmission, the scheme reduces total energy consumption by 14.6% at maximum delays of 0.1 s and 0.15 s. Simulation results show that, under the same conditions, the equal-power MEC optimization scheme with inter-user cooperative transmission consumes significantly less total energy than the optimization scheme based on independent user transmission.
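A toy version of the delay-constrained energy minimization described above is sketched below: for a single AR user offloading L bits within a frame deadline, the uplink transmission time is chosen to minimize transmit-plus-circuit energy. All constants, the single-user setting and the use of a generic bounded scalar optimizer are assumptions, not the paper's convex model.

```python
# Hypothetical toy version of delay-constrained uplink energy minimization:
# pick the transmission time t (within the frame deadline) that minimizes
# transmit + circuit energy for offloading L bits. Constants are illustrative.
from scipy.optimize import minimize_scalar

L    = 2e6      # bits to offload per frame
B    = 10e6     # bandwidth [Hz]
h    = 1e-7     # channel gain
N0   = 1e-13    # noise power [W]
P_C  = 0.1      # circuit power while transmitting [W]
T_MAX = 0.1     # frame deadline [s]

def energy(t):
    rate_exponent = L / (B * t)                   # spectral efficiency needed
    p_tx = (N0 / h) * (2 ** rate_exponent - 1)    # power to meet it (Shannon rate)
    return t * (p_tx + P_C)                       # total energy over the slot

# Lower bound kept well above zero to stay in a numerically sane range.
res = minimize_scalar(energy, bounds=(5e-3, T_MAX), method="bounded")
print(f"t* = {res.x * 1e3:.1f} ms, energy = {res.fun * 1e3:.3f} mJ")
```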
30.
Accessing computing resources in a remote cloud inherently incurs high end-to-end (E2E) delay for mobile users; cloudlets deployed at the edge of the network can mitigate this problem. Although some research focuses on allocating workloads among cloudlets, cloudlet placement that minimizes the deployment cost (consisting of both the cloudlet cost and the average E2E delay cost) has not been addressed effectively so far. The locations and number of cloudlets have a crucial impact on both the cloudlet cost in the network and the average E2E delay of users. Therefore, in this paper, we propose the Cost Aware cloudlet PlAcement in moBiLe Edge computing (CAPABLE) strategy, in which both the cloudlet cost and the average E2E delay are considered in the placement. To solve this problem, a Lagrangian heuristic algorithm is developed to obtain a suboptimal solution. Once cloudlets are placed in the network, we also design a workload allocation scheme that minimizes the E2E delay between users and their cloudlets while taking user mobility into account. The performance of CAPABLE has been validated by extensive simulations.
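The cost trade-off CAPABLE targets can be illustrated with the small, hypothetical instance below, solved by brute force rather than the paper's Lagrangian heuristic: the objective adds a fixed cost per deployed cloudlet to a weighted average of each user's delay to its nearest deployed site. Site names, costs and delays are invented.

```python
# Hypothetical sketch of the placement trade-off: minimize
# (deployment cost) + (weighted average user E2E delay) on a tiny instance.
from itertools import combinations

SITES = ["A", "B", "C", "D"]
SITE_COST = 10.0                     # cost per deployed cloudlet
DELAY_WEIGHT = 2.0                   # converts ms of delay into cost units

# delay[u][s]: E2E delay (ms) from user u to candidate site s (made-up numbers)
delay = {
    "u1": {"A": 2, "B": 9, "C": 7, "D": 12},
    "u2": {"A": 8, "B": 3, "C": 6, "D": 10},
    "u3": {"A": 11, "B": 5, "C": 2, "D": 4},
    "u4": {"A": 9, "B": 8, "C": 3, "D": 2},
}

def total_cost(placed):
    avg_delay = sum(min(delay[u][s] for s in placed) for u in delay) / len(delay)
    return SITE_COST * len(placed) + DELAY_WEIGHT * avg_delay

best = min(
    (frozenset(c) for r in range(1, len(SITES) + 1)
                  for c in combinations(SITES, r)),
    key=total_cost,
)
print("place cloudlets at:", sorted(best), "cost:", round(total_cost(best), 2))
```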